    The Mariner 5 flight path and its determination from tracking data


    On Trees, Chains and Fast Transactions in the Blockchain


    Efficient Constructions for Almost-everywhere Secure Computation

    The importance of efficient MPC in today's world needs no retelling. An obvious barebones requirement for executing MPC protocols is the ability of parties to communicate with each other. Traditionally, this problem is solved by assuming that every pair of parties in the network shares a dedicated secure link that enables reliable message transmission. This assumption becomes impractical as the number of nodes in the network grows, as it has today. In their seminal work, Dwork, Peleg, Pippenger and Upfal introduced the notion of almost-everywhere secure primitives in an effort to model the reality of large-scale global networks and to study the impact of limited connectivity on the properties of fundamental fault-tolerant distributed tasks. In this model, the underlying communication network is sparse, and hence some nodes may not even be in a position to participate in the protocol (all their neighbors may be corrupt, for instance). A protocol for almost-everywhere reliable message transmission, which guarantees that a large subset of the network can transmit messages to each other reliably, implies a protocol for almost-everywhere agreement, where nodes are required to agree on a value despite malicious or Byzantine behavior of some subset of nodes; an almost-everywhere agreement protocol in turn implies a protocol for almost-everywhere secure MPC that is unconditionally or information-theoretically secure. The parameters of interest are the degree d of the network, the number t of corrupted nodes that can be tolerated, and the number x of nodes that the protocol may give up. Prior work achieves d = O(1) for t = O(n/log n), and d = O(log^q n) for t = O(n), for some fixed constant q > 1. In this work, we first derive message transmission protocols that are efficient with respect to the total number of computations done across the network. We use this result to show an abundance of networks with d = O(1) that are resilient to t = O(n) random corruptions. This randomized result helps us build networks that are resistant to worst-case adversaries. In particular, we improve the state of the art for the almost-everywhere reliable message transmission problem in the worst-case adversary model by showing the existence of an abundance of networks that satisfy d = O(log n) for t = O(n), thus making progress on this question after nearly a decade. Finally, we define a new adversarial model of corruptions that is suitable for networks shared among a large group of corporations that: (1) do not trust each other, and (2) may collude, and we construct optimal networks achieving d = O(1) for t = O(n) in this model.
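    The degree/corruption trade-offs discussed above can be tabulated; this is only a summary of the bounds as stated in this abstract, with the settings labeled for readability:

    ```latex
    % Degree d vs. tolerated corruptions t, as stated in the abstract.
    \begin{tabular}{lll}
    Setting                              & Degree $d$              & Corruptions $t$ \\
    \hline
    Prior work (worst case)              & $O(1)$                  & $O(n/\log n)$   \\
    Prior work (worst case)              & $O(\log^{q} n),\; q>1$  & $O(n)$          \\
    This work (random corruptions)       & $O(1)$                  & $O(n)$          \\
    This work (worst case)               & $O(\log n)$             & $O(n)$          \\
    This work (colluding-groups model)   & $O(1)$                  & $O(n)$
    \end{tabular}
    ```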

    A review of elliptical and disc galaxy structure, and modern scaling laws

    A century ago, in 1911 and 1913, Plummer and then Reynolds introduced their models to describe the radial distribution of stars in 'nebulae'. This article reviews the progress since then, providing both an historical perspective and a contemporary review of the stellar structure of bulges, discs and elliptical galaxies. The quantification of galaxy nuclei, such as central mass deficits and excess nuclear light, plus the structure of dark matter halos and cD galaxy envelopes, is discussed. Issues pertaining to spiral galaxies, including dust, bulge-to-disc ratios, bulgeless galaxies, bars and the identification of pseudobulges, are also reviewed. An array of modern scaling relations involving sizes, luminosities, surface brightnesses and stellar concentrations is presented, many of which are shown to be curved. These 'redshift zero' relations not only quantify the behavior and nature of galaxies in the Universe today, but also serve as the modern benchmark for evolutionary studies of galaxies, whether based on observations, N-body simulations or semi-analytical modelling. For example, it is shown that some of the recently discovered compact elliptical galaxies at 1.5 < z < 2.5 may be the bulges of modern disc galaxies. Comment: Condensed version (due to contract) of an invited review article to appear in "Planets, Stars and Stellar Systems" (www.springer.com/astronomy/book/978-90-481-8818-5). 500+ references, incl. many somewhat forgotten pioneer papers. Original submission to Springer: 07-June-201

    Therapeutic Benefit of Radial Optic Neurotomy in a Rat Model of Glaucoma

    Radial optic neurotomy (RON) has been proposed as a surgical treatment to alleviate neurovascular compression and to improve venous outflow in patients with central retinal vein occlusion. Glaucoma is characterized by specific visual field defects due to the loss of retinal ganglion cells and damage to the optic nerve head (ONH). One of the clinical hallmarks of glaucomatous neuropathy is the excavation of the ONH. The aim of this work was to analyze the effect of RON in an experimental model of glaucoma in rats induced by intracameral injections of chondroitin sulfate (CS). For this purpose, Wistar rats were bilaterally injected with vehicle or CS in the eye anterior chamber, once a week, for 10 weeks. At 3 or 6 weeks of a treatment with vehicle or CS, RON was performed by a single incision in the edge of the neuro-retinal ring at the nasal hemisphere of the optic disk in one eye, while the contralateral eye underwent a sham procedure. Electroretinograms (ERGs) were recorded under scotopic conditions, and visual evoked potentials (VEPs) were recorded with skull-implanted electrodes. Retinal and optic nerve morphology was examined by optical microscopy. RON did not affect the ocular hypertension induced by CS. In eyes injected with CS, a significant decrease of retinal (ERG a- and b-wave amplitude) and visual pathway (VEP N2-P2 component amplitude) function was observed, whereas RON reduced these functional alterations in hypertensive eyes. Moreover, a significant loss of cells in the ganglion cell layer, and of Thy-1-, NeuN-, and Brn3a-positive cells, was observed in eyes injected with CS, whereas RON significantly preserved these parameters. In addition, RON preserved the optic nerve structure in eyes with chronic ocular hypertension. These results indicate that RON reduces functional and histological alterations induced by experimental chronic ocular hypertension.

    Smaller Gene Networks Permit Longer Persistence in Fast-Changing Environments

    The environments in which organisms live and reproduce are rarely static, and as the environment changes, populations must evolve so that phenotypes match the challenges presented. The quantitative traits that map to environmental variables are underlain by hundreds or thousands of interacting genes whose allele frequencies and epistatic relationships must change appropriately for adaptation to occur. Extending an earlier model in which individuals possess an ecologically critical trait encoded by gene networks of 16 to 256 genes with random or scale-free topology, I test the hypothesis that smaller, scale-free networks permit longer persistence times in a constantly changing environment. Genetic architecture interacting with the rate of environmental change accounts for 78% of the variance in trait heritability and 66% of the variance in population persistence times. When the rate of environmental change is high, the relationship between network size and heritability is apparent, with smaller and scale-free networks conferring a distinct advantage for persistence time. However, when the rate of environmental change is very slow, the relationship between network size and heritability disappears, and populations persist for the duration of the simulations, without regard to genetic architecture. These results provide a link between genes and population dynamics that may be tested as the -omics and bioinformatics fields mature, and as we become able to determine the genetic basis of ecologically relevant quantitative traits.

    Efficient Fully Secure Computation via Distributed Zero-Knowledge Proofs

    Secure computation protocols enable mutually distrusting parties to compute a function of their private inputs while revealing nothing but the output. Protocols with full security (also known as guaranteed output delivery) in particular protect against denial-of-service attacks, guaranteeing that honest parties receive a correct output. This feature can be realized in the presence of an honest majority, and significant research effort has gone toward attaining full security with good asymptotic and concrete efficiency. We present an efficient protocol for any constant number of parties n, with full security against t < n/2 corrupted parties, that makes black-box use of a pseudorandom generator. Our protocol evaluates an arithmetic circuit C over a finite ring R (either a finite field or R = Z_{2^k}) with communication complexity of (3t/(2t+1))S + o(S) R-elements per party, where S is the number of multiplication gates in C (namely, < 1.5 elements per party per gate). This matches the best known protocols for the semi-honest model up to the sublinear additive term. For a small number of parties n, this improves over a recent protocol of Goyal et al. (Crypto 2020) by a constant factor for circuits over large fields, and by at least an Ω(log n) factor for Boolean circuits or circuits over rings. Our protocol provides new methods for applying the sublinear-communication distributed zero-knowledge proofs of Boneh et al. (Crypto 2019) for compiling semi-honest protocols into fully secure ones, in the more challenging case of t > 1 corrupted parties. Our protocol relies on replicated secret sharing to minimize communication and simplify the mechanism for achieving full security. This results in computational cost that scales exponentially with n.
    Our main fully secure protocol builds on a new intermediate honest-majority protocol for verifying the correctness of multiplication triples by making general use of distributed zero-knowledge proofs. While this intermediate protocol only achieves the weaker notion of security with abort, it applies to any linear secret-sharing scheme and provides a conceptually simpler, more general, and more efficient alternative to previous protocols from the literature. In particular, it can be combined with the Fiat-Shamir heuristic to simultaneously achieve logarithmic communication complexity and constant round complexity.
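    The per-party communication bound quoted in this abstract can be made concrete by evaluating its leading coefficient for small corruption thresholds (simple arithmetic on the stated formula, not a result from the paper itself):

    ```latex
    % Per-party communication: \frac{3t}{2t+1} S + o(S) ring elements,
    % where S is the number of multiplication gates.
    \frac{3t}{2t+1}\bigg|_{t=1} = \frac{3}{3} = 1, \qquad
    \frac{3t}{2t+1}\bigg|_{t=2} = \frac{6}{5} = 1.2, \qquad
    \lim_{t\to\infty} \frac{3t}{2t+1} = \frac{3}{2}
    ```

    So for three parties (t = 1) the protocol sends about one ring element per party per multiplication gate, and the coefficient approaches but never reaches 3/2 as t grows, consistent with the "< 1.5 elements per party per gate" claim.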

    Improved Limits on Scattering of Weakly Interacting Massive Particles from Reanalysis of 2013 LUX Data

    We present constraints on weakly interacting massive particle (WIMP)-nucleus scattering from the 2013 data of the Large Underground Xenon (LUX) dark matter experiment, including 1.4 × 10⁴ kg day of search exposure. This new analysis incorporates several advances: single-photon calibration at the scintillation wavelength, improved event-reconstruction algorithms, a revised background model including events originating on the detector walls in an enlarged fiducial volume, and new calibrations from decays of an injected tritium β source and from kinematically constrained nuclear recoils down to 1.1 keV. Sensitivity, especially to low-mass WIMPs, is enhanced compared to our previous results, which modeled the signal only above a 3 keV minimum energy. Under standard dark matter halo assumptions and in the mass range above 4 GeV c⁻², these new results give the most stringent direct limits on the spin-independent WIMP-nucleon cross section. The 90% C.L. upper limit has a minimum of 0.6 zb at a 33 GeV c⁻² WIMP mass.

    Radiogenic and muon-induced backgrounds in the LUX dark matter detector

    The Large Underground Xenon (LUX) dark matter experiment aims to detect rare low-energy interactions from weakly interacting massive particles (WIMPs). The radiogenic backgrounds in the LUX detector have been measured and compared with Monte Carlo simulation. Measurements of LUX high-energy data have provided direct constraints on all background sources contributing to the background model. The expected background rate from the background model for the 85.3 day WIMP search run is (2.6 ± 0.2 (stat) ± 0.4 (sys)) × 10⁻³ events keVee⁻¹ kg⁻¹ day⁻¹ in a 118 kg fiducial volume. The observed background rate is (3.6 ± 0.4 (stat)) × 10⁻³ events keVee⁻¹ kg⁻¹ day⁻¹, consistent with model projections. The expectation for the radiogenic background in a subsequent one-year run is presented.

    Results on the Spin-Dependent Scattering of Weakly Interacting Massive Particles on Nucleons from the Run 3 Data of the LUX Experiment

    We present experimental constraints on the spin-dependent WIMP (weakly interacting massive particle)-nucleon elastic cross sections from LUX data acquired in 2013. LUX is a dual-phase xenon time projection chamber operating at the Sanford Underground Research Facility (Lead, South Dakota), which is designed to observe the recoil signature of galactic WIMPs scattering from xenon nuclei. A profile likelihood ratio analysis of 1.4 × 10⁴ kg day of fiducial exposure allows 90% C.L. upper limits to be set on the WIMP-neutron (WIMP-proton) cross section of σₙ = 9.4 × 10⁻⁴¹ cm² (σₚ = 2.9 × 10⁻³⁹ cm²) at 33 GeV/c². The spin-dependent WIMP-neutron limit is the most sensitive constraint to date.